To date, regional atrial strains have not been imaged in vivo, despite their potential to provide useful clinical information. To address this gap, we present a novel cine MRI protocol capable of imaging the entire left atrium at an isotropic 2-mm resolution in a single breath-hold. As proof of principle, we acquired data in 10 healthy volunteers and 2 cardiovascular patients using this technique. We also demonstrated how regional atrial strains can be estimated from these data, following a manual segmentation of the left atrium, using automatic image-tracking techniques. The estimated principal strains vary smoothly across the left atrium and are similar in magnitude to estimates reported in the literature.

Cardiac magnetic resonance (MR) tissue tagging offers an excellent solution for tracking deformation and is considered the reference standard for the quantification of strain. However, because it requires a dedicated acquisition sequence and post-processing software, tagged MR acquisitions are performed much less frequently in routine clinical practice than anatomical cine MR sequences. Using tagged MR as the reference standard, this study proposes an approach to evaluate a diffeomorphic image registration algorithm applied to cine MR images to compute cardiac deformation. In contrast to previous evaluation methods, which compared final results such as strain computed from cine and tagged MR sequences, the proposed method performs a direct frame-to-frame comparison. To overcome the misalignment between the tagged and cine MR images, the proposed approach transforms to and from two-dimensional image pixel coordinates and three-dimensional space using the meta-information encoded in the MR images. Linear temporal interpolation is performed using each frame's acquisition time since the last R-wave peak of the electrocardiogram signal, recorded in the meta-information.
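The pixel-to-world mapping and temporal alignment described above can be sketched with DICOM-style meta-information. The function names and argument layout below are illustrative assumptions, not the authors' implementation; the fields mirror the DICOM ImagePositionPatient, ImageOrientationPatient, and PixelSpacing tags.

```python
import numpy as np

def pixel_to_world(ij, position, orientation, spacing):
    """Map a 2-D pixel index (row, col) to 3-D patient coordinates.

    position:    3-vector, world coordinates of the first pixel
                 (cf. DICOM ImagePositionPatient)
    orientation: 6-vector, direction cosines of the first row and the
                 first column (cf. DICOM ImageOrientationPatient)
    spacing:     (row spacing, column spacing) in mm (cf. PixelSpacing)
    """
    row_dir = np.asarray(orientation[:3], float)  # direction of increasing column index
    col_dir = np.asarray(orientation[3:], float)  # direction of increasing row index
    r, c = ij
    return np.asarray(position, float) + r * spacing[0] * col_dir + c * spacing[1] * row_dir

def interpolate_points(t, frame_times, frame_points):
    """Linearly interpolate tracked 3-D points to trigger time t
    (ms since the last R-wave peak), one coordinate at a time."""
    frame_points = np.asarray(frame_points, float)  # (n_frames, n_points, 3)
    out = np.empty(frame_points.shape[1:])
    for p in range(frame_points.shape[1]):
        for axis in range(3):
            out[p, axis] = np.interp(t, frame_times, frame_points[:, p, axis])
    return out
```

With both sequences mapped into the same patient coordinate system and resampled to common trigger times, corresponding point sets from cine and tagged images can be compared directly.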
Several statistical measures of the registration error are computed and reported, based on the Euclidean distances between the corresponding sets of points obtained from the cine and tagged MR images.

The main curative treatment for localized colon cancer is surgical resection. However, when tumor residuals are left behind, positive margins are found during the histological examination and additional treatment is needed to prevent recurrence. Hyperspectral imaging (HSI) can offer non-invasive surgical guidance with the potential to optimize surgical effectiveness. In this paper, we investigate the capability of HSI for automated colon cancer detection in six ex-vivo specimens, employing a spectral-spatial, patch-based classification approach. The results demonstrate the feasibility of assessing the benign and malignant boundaries of the lesion with a sensitivity of 0.88 and a specificity of 0.78. Compared with state-of-the-art deep-learning-based approaches, the proposed method with a new hybrid CNN performs best (AUC 0.82 vs. 0.74). This study paves the way for further investigation towards improving surgical outcomes with HSI.

Osteosarcoma is a prominent bone cancer that typically affects adolescents and people in late adulthood. Early recognition of this disease relies on imaging technologies such as X-ray radiography to detect tumor size and location. This paper aims to differentiate osteosarcoma from benign tumors by analyzing both imaging and RNA-seq data through a combination of image processing and machine learning.
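One common design for combining imaging and expression data, as in the pipeline above, is late feature fusion followed by a classifier scored with AUC. The sketch below is a hypothetical illustration, not the authors' pipeline: the fusion is a plain concatenation and the AUC is computed via the rank-sum (Mann-Whitney) identity.

```python
import numpy as np

def fuse_features(image_feats, rnaseq_feats):
    """Late fusion: concatenate per-sample imaging and RNA-seq feature vectors."""
    return np.hstack([np.asarray(image_feats, float), np.asarray(rnaseq_feats, float)])

def auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs that the
    classifier orders correctly, counting ties as one half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The fused vectors would then feed any standard classifier, with the AUC estimated under k-fold or leave-one-out cross-validation.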
In experimental results, the proposed method achieved an area under the receiver operating characteristic curve (AUC) of 0.7272 with three-fold cross-validation and an AUC of 0.9015 with leave-one-out cross-validation.

As deep convolutional neural networks (DCNNs) have shown robust performance in medical image analysis, a number of deep-learning-based tumor detection methods have been developed in recent years. Nowadays, automatic detection of pancreatic tumors from contrast-enhanced computed tomography (CT) is widely applied for the diagnosis and staging of pancreatic cancer. Traditional hand-crafted methods extract only low-level features, and standard convolutional neural networks fail to make full use of effective context information, which leads to inferior detection results. In this paper, a novel and efficient pancreatic tumor detection framework is designed to fully exploit context information at multiple scales. More specifically, the contribution of the proposed method consists of three components: Augmented Feature Pyramid Networks, Self-adaptive Feature Fusion, and a Dependencies Computation (DC) Module. First, a bottom-up path augmentation is established to fully extract and propagate low-level, accurate localization information. Then, Self-adaptive Feature Fusion encodes richer context information at multiple scales based on the proposed regions. Finally, the DC Module is specifically designed to capture the interaction information between proposals and surrounding tissues. Experimental results show competitive detection performance, with an AUC of 0.9455, which, to the best of our knowledge, outperforms other state-of-the-art methods, demonstrating that the proposed framework can detect pancreatic tumors efficiently and accurately.

Detection, diagnosis, and removal of colorectal neoplasms are well-accepted colorectal cancer prevention methods.
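The Self-adaptive Feature Fusion step above can be read as a learned weighted combination of features pooled from several pyramid levels. The sketch below is a simplified assumption of that idea (one learned scalar weight per level, softmax-normalised), not the authors' implementation.

```python
import numpy as np

def self_adaptive_fusion(level_features, fusion_logits):
    """Fuse per-region features pooled from several pyramid levels.

    level_features: list of (D,) vectors, one per pyramid level
    fusion_logits:  (L,) unnormalised weights, softmax-normalised so the
                    network can learn how much each scale contributes
    """
    logits = np.asarray(fusion_logits, float)
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    w /= w.sum()
    return sum(wi * np.asarray(f, float) for wi, f in zip(w, level_features))
```

A fixed-weight average is the special case of equal logits; training the logits lets the detector emphasise whichever scale localises a given proposal best.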
Although promising endoscopic imaging techniques, including narrow-band imaging, have been developed, these techniques are operator-dependent and interpretations of the results may vary. To overcome these limitations, we applied deep learning to develop a computer-aided diagnostic (CAD) system for colorectal adenoma. We collected 3000 colonoscopic images and divided them into four categories according to the final pathology: normal, low-grade dysplasia, high-grade dysplasia, and adenocarcinoma. We implemented three convolutional neural networks (CNNs) using Inception-v3, ResNet-50, and DenseNet-161 as baseline models, and further refined the models using several strategies: replacement of the top layer, transfer learning from pre-trained models, fine-tuning of the model weights, rebalancing and augmentation of the training data, and 10-fold cross-validation. We compared the outcomes of the three CNN models to those of two endoscopist groups with different years of experience, and visualized the model predictions using Class Activation Mapping (CAM).
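The CAM visualization mentioned above applies to CNNs that end in global average pooling followed by a linear classifier: each final conv feature map is weighted by the classifier weight of the target class and the maps are summed. A minimal numpy sketch, assuming the feature maps and classifier weights have already been extracted from the trained model:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class Activation Mapping for a global-average-pooling CNN.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear layer
    Returns an (H, W) heat map normalised to [0, 1]; in practice it is
    upsampled onto the input image for overlay.
    """
    fm = np.asarray(feature_maps, float)
    cam = np.tensordot(fc_weights[class_idx], fm, axes=1)  # weighted sum over channels
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

High values in the resulting map indicate the image regions that contributed most to the model's score for the chosen pathology class, which is what allows the CNN predictions to be compared against the endoscopists' reasoning.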